
24 May 2012

Vincent Sanders: Interrupt Service Routines

Something a little low level for this post. I have been asked recently how to "test" for the maximum duration of an Interrupt Service Routine (ISR) in Linux.

To do this I probably ought to explain what the heck an ISR is!

A CPU executes one instruction after another and runs your programs. However, early in the history of the electronic computer it became apparent that there were sometimes events, generally caused by a hardware peripheral, that required some other code to be executed without waiting for the running program to check for the event.

This could have been solved by having a second processor to look after those exceptional events, but that would have been expensive and difficult to synchronize, and the designers took the view that there was a perfectly good processor already sat there just running some user's program. This interruption in the code flow became known as, well, an Interrupt (and the other approach as polling).

The hardware for supporting interrupts started out very simply: the processor would complete execution of the current instruction and, just as the Program Counter (PC) was about to be incremented, if an Interrupt ReQuest (IRQ) was pending the PC would be stored somewhere (often a "special" IRQ stack or register) and execution would then start at some fixed address.

The interrupting event would be dealt with by some code and execution returned to the original program without it ever knowing the CPU just wandered off to do something else. The code that deals with the interrupt is known as the Interrupt Service Routine (ISR).

Now I have glossed over a lot of issues here (sufficient to say there are a huge number of details in which to hide the devil) but the model is good enough for my purpose. A modern CPU has an extraordinarily complex system of IRQ controllers to deal with numerous peripherals requesting the CPU stop what it is doing and look after something else.

This system of controllers will ultimately cause program execution to be delivered to an ISR for that device. If we were living in the old single thread of execution world we could measure how long execution remains within an ISR, perhaps by using a physical I/O line as a semaphore and an external oscilloscope to monitor the line.

You may well ask "Why measure this?". Historically, while the ISR was running nothing else could interrupt it, which meant that even if there was a more important event it would not get the CPU until the first ISR was complete. This delay was known as IRQ latency, which was undesirable if you were doing something that required an IRQ to be serviced in a timely manner (like playing audio).

This is no longer how things are done. While the top half still runs with IRQs disabled, many handlers are now threaded interrupt handlers and are preemptable (i.e. can be interrupted themselves). This leads to the first issue with measuring ISR time: the ISR may be executed in multiple chunks if something more important interrupts it. Indeed an ISR may appear to have taken many times longer on one occasion than another because the CPU has been off servicing multiple other IRQs.

Then we have the issue that Linux kernel drivers often do as little as possible within their ISR, often only as much as is required to clear the physical interrupt line. Processing is then continued in a "bottom half" handler. This leads to ISRs which take practically no time to execute while processing is still being caused elsewhere in the system.

The next issue is that the world is not uniprocessor any more. How many processors does a machine have these days? Even a small ARM SoC can often have two or even four cores. This makes our timing harder because it is now possible to be servicing multiple interrupts from a single peripheral on separate cores at the same time!

In summary, measuring ISR execution time is not terribly enlightening and almost certainly not what you are interested in. Much more likely you really want to be examining something for which ISR time was a historical proxy, such as IRQ latency or system overheads in locking.

Vincent Sanders: Linux kernel presentation

Recently I was asked to present a short introduction to the Linux kernel for our project managers. I put together a short slide deck for the presentation which I have decided to share.
I feel it's important to note that I had a lot more to say about each section and the slides were more an aid to my memory to cover the important points. Of special note is the diagram showing the "hierarchy" of contributors, which is of course nowhere near as well stratified as portrayed.

8 May 2012

Vincent Sanders: NetSurf at a show

The Wakefield RISC OS show is an event the NetSurf project has attended for a long time. In fact since 2005, when the "stand" was a name on an A4 sheet, through 2006, 2007, 2008, 2009, 2010 and 2011, we have always been present.

The event has changed in that time from a large affair with many exhibitors to a small specialist interest event with a handful of stands. I took some pictures this year which give a fair impression of the event.

We were seriously considering not attending this year, as 2011 had seen us barely break even on donations versus the expenses of attending. However we decided that the project's annual Grey Ox Inn post-event dinner was probably worth making the effort for.

So we all met up in a hotel just off the M1 near Wakefield and set up our table. And although NetSurf as a project now has much more usage on other platforms, we still represent the principal browser for the RISC OS platform!

We had a pleasant time, talked to a lot of users and made our expenses back in donations. Overall an amusing Saturday. Based on the size of the event and the number and age of the attendees, I fear RISC OS may be destined for the history books.

Vincent Sanders: Repaying a debt

Some debts are merely financial and some easily repaid, but some require repayment in kind. Few debts are more important to me personally than a favour earned by a good friend.

Several years ago, before I started this blog, I replaced the kitchen in my house. Finances were tight at the time and I had to do the entire refit with only limited professional help. Because of this I imposed upon Mark Hymers and Steve Gran to come and assist me. They worked tirelessly for three days over a bank holiday for no immediate reward.

Mark and Steve with a drill
This weekend I had the opportunity to assist Mark with his own kitchen refit and repay my debt.

Although the challenges on this build have been different, they were nonetheless present, including walls which were most definitely not square and cabinets affixed 10mm too high so the doors could not close.

We also got to make a hole for a 125mm extractor, which was physically demanding and not a little tiring (Steve, actually wielding the drill, had fabulous aim).

I took some photos to document the process which has resulted in an image which is positively threatening, though the two of them are nice people really!

All in all a pleasant weekend with friends, the whole favour thing was really moot, I would have done it for a friend anyway.

29 March 2012

Vincent Sanders: Failing to avoid the spotlight

I am usually fortunate at conferences and, aside from the obligatory "group photo", manage to completely avoid being in photos and videos of the event. However I recently attended the Linaro Connect event in San Francisco and somehow got volunteered to be on a panel.

Now usually such panels are fine: you get a bit of notice so can at least prepare some basic ideas, and no-one bothers to film them. This time though I was not so fortunate, and with no notice the overweight greying old fart has been captured on video.

So here for your viewing pleasure is the panel discussion on "Is the GNU user space dying?". I should warn readers of a sensitive disposition that I appear fully dressed and awake(ish) in this video so viewer discretion is advised.
Video: http://www.youtube.com/watch?v=_HGrCdFA7L8

26 March 2012

Vincent Sanders: NetSurf Developer Workshop

NetSurf Developers
Over the weekend we held the NetSurf developer workshop. The event was kindly hosted by Collabora at their Cambridge offices. The provided facilities were agreed to be excellent and contributed to the success of the event.

Five developers managed to attend from around the UK: John-Mark Bell, Vincent Sanders, Michael Drake, Daniel Silverstone and Rob Kendrick. In addition James Shaw, one of the project founders, made a special appearance on Saturday.

Starting from Friday afternoon we each put in over 25 hours of actual useful work and made almost 170 commits changing over 350 files, added 10,000 new lines of code and altered another 18,000.

The main aim of the event was to make the transition from using libxml2 to our own libdom library for the browser DOM tree. This was done to improve the browser's performance and size when manipulating the DOM, but it also gives us the ability to extend the browser's features to include dynamic rendering and Javascript.

We also took the opportunity to discuss and plan other issues including:
  • User interface message handling and translation.
  • User preference handling.
  • Toolchain support.
  • Disc caching.
  • Javascript binding.
  • Electronic book content handler.
Rob tackled the first parts of the messages conversion from numerous separate files into a single easy to handle file which will in future allow for easier translation and reduce message proliferation.

We made decisions on the ongoing rework of user preference handling which will be implemented in future.

The decision on the toolchain was slightly changed to be that any core or library code (non frontend/toolkit specific parts) is required to conform to the C99 standard. Frontends are permitted to recommend and use whatever tools their maintainer selects, but they cannot enforce those restrictions on core code. This issue arises principally because the BeOS maintainer is compiling NetSurf with g++ 2.95, which is missing several important language features we wish to use.

Developers at work
The recurring issue of disc caching was raised again and we have come up with what we hope is a reasonably elegant solution to be implemented over the forthcoming months.

Now there is a suitable DOM to bind against, the existing Javascript engine support will be properly integrated and should result in basic script support before the 3.0 release. This support will remain a build time option so NetSurf can continue to be used on platforms where the interpreter is too resource intensive to be used.

The possibility of integrating a basic page-based content handler for epub and mobi type documents was discussed, and while the idea was well received no decisions on implementation were made.

Overall the event was a resounding success and we are left with only a small handful of regressions which appear straightforward to solve. We also have a clear set of decisions on what we need to do to improve the browser.

12 March 2012

Vincent Sanders: Time flies when you are having fun.

It has been months since I last put something here, so I think that requires a quick catchup.

The family Christmas was a brilliantly restful affair mostly spent doing nothing at home, just slightly tinged with anticipation of starting the new job.

I arranged to rent a room in Cambridge with just a moderate three mile walk to work. This has meant that after a decade of my daily commute being the steps downstairs to my desk I am now walking six miles a day!

Because of this unexpected physical exertion I seem to be slowly losing weight instead of gaining it. Alas there is still a long way to go before I am my recommended weight (unless I gain three feet in height ;-)

Work has been fabulous, lots of great people doing interesting stuff. I was here only a month before I got sent to San Francisco for the Linaro Connect event, though getting on the outbound plane amidst the worst snowstorm in recent times was both tiring and not a little stressful.

Waking up at 03:00 to get the 04:00 coach from Cambridge to Heathrow would not have been too much of an issue if I had managed to travel down from Leeds and arrive before 02:00. The coach was so much fun that I arrived at 08:40, just as check-in was closing for my 09:45 flight.

This was my first experience of San Francisco (although I have been to LA and Portland previously) and while most of the time was spent out in Redwood city at the conference venue Robert did take us for cocktails, comedy and cable cars which was a wonderful night out.

Since my return from the US I have also attended the Debian Bug Squashing Party in Cambridge and had a thoroughly amusing time with many of the usual suspects though I was encouraged to see a few new faces about too.

The commute up and down the country is getting tedious and seems to vary between taking two and four hours depending on traffic. This is encouraging me to consider moving the family as soon as I can. They are all doing great and seem to be thriving despite my absence during weekdays.

I hope to put finger to keyboard here a bit more regularly in the forthcoming weeks though a lot of my personal time is being swallowed with commuting and not being directed towards my open source pursuits.

9 December 2011

Vincent Sanders: And one man in his time plays many parts

Perhaps the immortal bard was making a more noble and deep reference to the stages of one's life, but I feel no compunction in lifting the line for my purpose.

And it is about changes in my own life I wish to speak. I have been employed by Simtec for more than a decade now. There have been great changes in the embedded electronics industry in that time, especially for those using open source software.

A decade ago we were considered cutting edge and strange for advocating and using NetBSD and Linux, especially for committing our changes back upstream; today that is considered normal behaviour.

Alas the last couple of years have shown that Simtec is not the best place for me to continue, so I have decided to move on to pastures new. From the new year I will be working with a great bunch of people at Collabora.

This should not have any great detrimental impact on my open source activities but as with any big life change, it may take a while to settle down and I apologize in advance if I am not as responsive as usual.

30 October 2011

Vincent Sanders: Software that adds another dimension to things

I think that it will come as no surprise to my fellow software engineers if I note that I almost never write new software any more. I maintain, I augment, I refactor, I debug but very, very rarely do I start something new.

This probably has something to do with my maturity as a code monkey; my immediate reaction is to seek out a solution to a problem that already exists and perhaps extend it to fulfil my requirements.

Partially this comes from my innate laziness, but also over time I have discovered that I am a "finisher": the role I invariably end up in involves doing all the final bits to make the client accept a project. Because I know my reaction is to always finish something I start, I avoid starting things.

Anyhow, enough introspection. A couple of months ago I was talking on IRC about my 3D printer and was asked "can you print the Debian logo?". So I hunted around for software that would let me convert a bitmap into a suitable 3D format. The results were rather disappointing: the few tools I could find were generally python scripts which simply generated a matrix of cuboids, one for each pixel, their heights corresponding to the pixel value.

I used one such script to generate a file for the Debian swirl and imported it into the OpenSCAD 3D modelling application. I got an inkling of the issues involved after the scene render took over half an hour. The resulting print was blocky and overall I was not terribly happy with the outcome.

So I decided I would write a program to convert images into a 3D representation. I asked myself, how hard can it be?

Sometimes starting from an utterly naive approach with no knowledge of a problem can lead to new insights. In this case I have spent all my free coding time for a month producing a program which I am now convinced has barely even scratched the surface of the possible solutions.

Having said that, I started from a blank editor window and a manual gcc command line compilation and progressed to an actually useful tool. Arguably of more import to me, I have learned and implemented a load of new algorithms, which has actually been mentally stimulating and fun!

The basic premise of the tool is to take a PNG image, quantise it into a discrete number of levels, convert that as a height map into a triangle mesh, index that mesh (actually a very hard problem to solve efficiently), simplify the indexed mesh and output the result in a selection of 3D file formats.

The mesh generation alone is a complex field which, it appears, often devolves into the marching cubes algorithm simply out of despair of anything better ;-) I have failed to implement marching cubes so far (though I have partially implemented marching squares, an altogether simpler algorithm).

The mesh indexing creates an indexed list of vertices from the generated mesh and back annotates it with which faces are connected to which vertices. This effectively generates a useful representation of the mesh's topology which can then be used to reduce the complexity of the mesh, or at least describe it. To gain efficiency I implemented my first ever bloom filter as part of my solution. I also learned that generating a plausible hash for said filter is a lot harder than it would seem. In the end I simply used the FNV hash, which produces excellent results for very little computation cost.

The mesh simplification area is awash with academic research, most of which I ended up skipping and simply went for the absolute simplest edge removal algorithm. Implementing even this and maintaining a valid mesh topology was challenging.

By comparison the output of the various formats was positively trivial, mainly littered with head scratching over the bizarre de-facto "extensible" formats where only one trivial corner is ever actually implemented.

All in all I have had fun creating the PNG23D project and have actually used it to generate some useful output. I have even printed some of it to generate a lithophane of Turing. I now look forward to several years of maintaining and debugging it and doing all the other things I do instead of writing new software ;-)

22 October 2011

Vincent Sanders: I do not want anything NASty to happen

I have a lot of digital data to store, like most people I have photos, music, home movies, email and lots of other random data. Being a programmer I also tend to have huge piles of source code and builds lying about. If all that was not enough I work from home so I have copious mountains of work data too.

Many years ago I decided I wanted a single robust, backed up file server for all of this. So I slapped together a machine from leftovers, stuffed some drives in a software RAID array, served it over NFS and CIFS and never looked back.

Over time the hardware has changed and the system upgraded but the basic approach of a custom built server has remained. When I needed a build engine to churn out hundreds of kernels a day for the ARM Linux autobuilder the system was expanded to cope and mid 2009 the current instantiation was created.

Current full height tower fileserver
The current system is a huge tower case (courtesy of Mark Hymers) containing a Core 2 Quad 2.33GHz (8 threads) with 8 Gigabytes of memory and 13 drives across four SATA controllers split into several RAID arrays. Despite buying new drives at higher capacities I have tended to keep the old drives around for extra storage, resulting in what you see here.

I recently looked at the power usage of this monster and realised I was paying a lot of money to spin rust, which was simply uneconomic. Seriously, why did I have six sub-200 Gigabyte drives running when a single 2TB drive to replace them would pay for itself in power saved in under a month! In addition I no longer required the compute power either; it was most definitely time for a downsize!

Several friends suggested a HP micro server might be just the thing. After examining and evaluating some other options (Thecus and QNAP NAS) I decided the HP route was most definitely the best value for money.

The HP Proliant micro server is a dual core Athlon II 1.3GHz system with a Gigabyte of memory, space for four SATA hard drives and a single 5.25 inch bay for an optical drive. All this in a cube roughly 250mm on a side.

My HP Proliant microserver
I went out and bought the server from ebuyer for £235 with free shipping and £100 cashback. I immediately sent off the cashback paperwork so I would not forget (what an odd way to get a discount), making the total cost for the unit £135. I then used Crucial to select a suitable memory upgrade to take the total to 2 Gigabytes of RAM for £14.

The final piece of the solution was the drives for the storage. I decided the best capacity to cost ratio could be had from 2 TB drives, which with four bays available gives a raw capacity of 8 TB, or more usefully for this discussion about 7.3 TiB.

I did an experiment with three 1 TB 7200 RPM drives from the existing server and determined that the overall system would not really benefit enough to justify the 50% price premium of 7200 RPM drives over 5400 RPM devices. I ended up getting four Samsung Spinpoint F4EG 2 TB drives for £230.

I also bought a black LG DVD-RW drive for £16. I would also have required a SATA data cable and a molex to SATA power cable had I not already got them.

My HP microserver with the front door open
Putting the components together was really simple. The internal layout and design of the enclosure mean it is easy to work with, and it has the feel of build quality I usually associate with HP and IBM server kit, not something this small and inexpensive.

The provided documentation is good but unnecessary as most operations are obvious. They even provide the bolts to attach all the drives along with a wrench in the lockable front door, how thoughtful is that!

I then installed the system with Debian squeeze from the optical drive, principally because I happened to have a network installer CD to hand, although the BIOS does have network boot capability.

I used the installer to put the whole initial system together and did not have to resort to the command line even once, very impressed with how far D-I has come.

After asking several people for advice the general consensus was that I should create two partitions on each drive one for a RAID 1 /boot and one for a RAID 5 LVM area.

I did have to perform the entire install a second time because there is a gotcha with GUID Partition Table, RAID 1 boot drives and GRUB. You must have a small "BIOS" partition on the front of the drive or GRUB cannot install in the MBR and your system will not boot!

The partition layout I ended up with looks like:
Model: ATA SAMSUNG HD204UI (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  32.0MB  32.0MB                     bios_grub
 3      32.0MB  1000MB  968MB                      raid
 2      1000MB  2000GB  1999GB                     raid

The small partition was configured as RAID 1 across all four drives, formatted ext2 and given the mount point /boot. The large space was configured as RAID 5 across all four drives with LVM on top. Logical volumes were allocated and formatted ext3 (on advice from several people about ext4 instability they had observed) for a 50 GiB root, 4 GiB of swap and 1 TiB of home space.

The normal Debian install proceeded and after the post install reboot I was presented with a login prompt. Absolutely no surprises at all no additional drivers required and a correctly running system.

Over the next few days I did the usual sysadmin stuff and rsynced data from the old fileserver, creating logical volumes for the various arrays from the old server, none of which presented much of a problem. The 5.5TiB RAID 5 did however take a day or so to sync!

I used the microserver's eSATA port to attach the external drives I use for backup purposes, which has also not been an issue so far.

I am currently running both the new and old systems for a few days and rsyncing data to the microserver until I am sure of it. Actually I will make the switch this weekend and shut the old system down and leave it till next weekend before I scrub the old drives.

Before I made it live I decided to run some benchmarks and gather some data just for interest.
Bonnie (Version 1.96) was run in the root logical volume using a 4GiB size and 16 files. (I repeated the tests in other volumes; there is sub-1% variation.)

         Sequential Output           Sequential Input    Random
         Per Chr  Block    Rewrite  Per Chr  Block      Seeks
/sec     378K     41M      37M      2216K    330M       412.8
%CPU     97       11       8        91       30         15
Latency  109ms    681ms    324ms    116ms    93389µs    250ms

         Sequential Create          Random Create
         Create   Read     Delete   Create   Read       Delete
/sec     11697    +++++    18330    14246    +++++      14371
%CPU     24       +++      28       29       +++        22
Latency  29021µs  814µs    842µs    362µs    51µs       61µs

There do not seem to be any notable issues there; the write speeds are a little lower than I might like, but that is the cost of RAID 5 and 5400 RPM drives.

The rsync operations used to sync up the live data managed just short of 20MiB/s for the home partition, comprising 250GiB in two and a half million files with the expected mix of file sizes. The video partition managed 33MiB/s on 1TiB of data in nine thousand files.

The bonnie tests were performed accessing the server over NFS with 24GiB size and 16 files.
         Sequential Output           Sequential Input    Random
         Per Chr  Block    Rewrite  Per Chr  Block      Seeks
/sec     1733K    29M      19M      4608K    106M       358.3
%CPU     98       2        4        93       10         10
Latency  10894µs  23242ms  69159ms  49772µs  224ms      250ms

         Sequential Create          Random Create
         Create   Read     Delete   Create   Read       Delete
/sec     1465     3714     2402     1576     4082       1529
%CPU     8        10       9        9        9          7
Latency  148ms    24821µs  157ms    108ms    2074µs     719ms

or alternatively as percentages against the previous direct access values

         Sequential Output           Sequential Input    Random
         Per Chr  Block    Rewrite  Per Chr  Block      Seeks
/sec     464      68       51       213      32         87
%CPU     101      18       50       104      34         71
Latency  9        2512     13248    93       227        79

         Sequential Create          Random Create
         Create   Read     Delete   Create   Read       Delete
/sec     12       +++      13       11       +++        10
%CPU     33       +++      32       31       +++        31
Latency  509      3049     18646    29834    4066       1178688

Not that that tells us much, aside from that writes are a bit slower over the network, reads are limited by gigabit network bandwidth, and the latency of disc access over the network is generally poorer than direct access.

In summary, the total cost was £395 for a complete ready to use system with 5.5TiB of RAID 5 storage which can be served over NFS at nearly 900Mbit/s. Overall I am happy with the result; my only real issue is that the write performance is a little disappointing, but it is good enough for what I need.

11 October 2011

Vincent Sanders: Sometimes I am just dumb

I have recently been working on some code for NetSurf which builds up an output buffer by repeatedly calling snprintf(). No great shock there; it is a well understood, trivial pattern that has been used for ages.

Of course I discovered a buffer overflow, which to be fair had already been pointed out to me by another developer and I just failed to see it... Can I blame my old age? No? Bah, get off my lawn!

Basically it boils down to me simply not seeing where C helpfully let me subtract one size_t typed value from another for a length, and me completely forgetting that a negative result would simply become a large positive value...

I (erroneously) believed snprintf took a (signed) int as the buffer length; of course it returns one, but it takes a size_t which is, of course, unsigned.
Gosh I feel silly now, in fact I was so convinced I was right I wrote a program to "prove" it.
/* snprintf.c
 *
 * snprintf example
 *
 * Public domain (really I do not think its even copyrightable anyway)
 *
 * cc -Wall -o snprintf-ex snprintf-ex.c
 */

#include <stdio.h>
#include <string.h>

#define SHOW printf("%3ld %.*s*%s\n", (long)(string_len - slen), (int)slen, string, string + strlen(string) + 1)

int main(int argc, char **argv)
{
        char string[64];
        size_t string_len = sizeof(string) / 2;
        size_t bloop;
        size_t slen = 0;

        /* initialise string */
        for (bloop = 0; bloop < (sizeof(string) - 1); bloop++)
                string[bloop] = '0' + (bloop % 10);

        string[bloop] = 0; /* null terminate */

        printf("%3zu %s\n", string_len, string);

        /* try an empty string */
        slen += snprintf(string + slen, string_len - slen, "%s", "");

        SHOW;

        /* blah blah */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        /* this one should exceed the allowed length */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        /* should not call snprintf if slen exceeds string_len as the
         * subtraction results in a negative value which becomes a huge
         * unsigned size!
         */

        /* this one starts exceeding the allowed length */
        slen += snprintf(string + slen, string_len - slen, "Lorem ipsum dolor");

        SHOW;

        return 0;
}

Of course all this really proved was that I was wrong and I needed to clean up the original code as soon as possible.

A good lesson to learn here is that no matter how experienced you are, you can be mistaken. Perhaps some redemption I can take from this is that I have matured enough as a programmer to write a test program to prove myself wrong!

6 October 2011

Vincent Sanders: Introduction to printing in another dimension

Mankind, it would appear, owes a great deal of its evolutionary advantage to using tools. This ability appears to have been massively amplified by our creation of machine tools.

A machine tool is widely defined to be a machine where the movement of the tool (the tool path) is not directly controlled by a human. One of the first known examples is a late 15th century lathe used to cut screw threads. The Industrial Revolution was intimately interconnected with the creation of new machine tools, and arguably by the mid 19th century all the distinct subtractive machine tool types had been discovered.

I ought to explain the word subtractive in this context; it is a pretty simple and rather arbitrary distinction (but important for this discussion). Traditional machining removes or subtracts material to obtain a finished item, akin to a sculptor revealing the statue within a block of stone using a chisel and hammer. The converse is, unsurprisingly, the additive process, where material is added to create the finished item.

The machine tools from the 19th century were primarily single use devices controlled by gears and link mechanisms. Although the Jacquard loom was well known, because of the physical engineering difficulties combining the concept with a machine tool to create a programmable tool path was not fully realised until the opening of the 20th century.

In the late 1940s electrical motors and punch cards/tape made machine tools Numerically Controlled (NC) and when computers arrived in the 60s we gained Computer Numerical Control (CNC) and the opportunity to completely screw things up with software became available.

With the advent of CNC, additive systems became practical and by the late 1980s these machines were being widely used for Rapid Prototyping.

The first additive system in general use was the simple pen plotter, which added ink on top of paper and became popular in draughting offices for producing blueprints etc. Though more generally thought of as a computer printing technique, plotters owe their heritage to CNC machines.

Next came prototyping systems based on layered object manufacture, which cut shapes in a thin flat material (paper or plastic) and glued them together. These systems were expensive compared to casting processes (use a subtractive machine to make a mould and cast the part), extremely wasteful of source material, and the results could be of variable quality. Systems based on this process are still manufactured and used.

Then came the stereolithography approach, which scans a focused UV laser to cure resin and build up an object. There are several commercial machines available and even some home built systems, but the cost of the resin has so far kept this approach from being generally cost effective.

Currently the most common commercial rapid prototyping additive systems are selective sintering processes where either an electron beam or a high power laser melts a layer of powdered material on a bed; the bed is lowered, more powder is added and the process repeated. This process can use many different types of material and is very flexible, as the powder used can be plastic or metal. The quality is very high and high resolutions are available. Unfortunately these machines are expensive, generally starting around £20,000, which puts them out of most individuals' reach.

If anyone is still reading here is the summary of what we have covered so far:
  • Humans have used tools since they stopped being monkeys.
  • More than a century back we figured out how to make machines control the tools.
  • Fifty years back we made computers control the tools; before this all tools were subtractive.
  • In the last twenty years we have discovered several expensive ways to make objects with additive methods.
Now we get to the promise of the title, in the last few years Fused Filament Fabrication has become a viable option for a hobbyist. This method extrudes a thermoplastic through a nozzle and constructs an object one layer at a time from the bottom up.

The RepRap project at Bath University helped kickstart development of a plethora of practical, operational 3D printers that can be built or bought. These machines are relatively inexpensive (starting from £400 if you build one yourself) and the feedstock is also reasonably inexpensive.

In another post I will discuss the actual practicalities of building and running one of these devices and look at their software.



13 September 2011

Vincent Sanders: Electricity is really just organized lightning.

I have recently been working on a project that requires a 12V supply. Ordinarily this is no problem: my selection of bench supplies is generally more than a match for anything I throw at them.

My TS3022S Bench Supply
This project however needed a little more "oomph" than usual, specifically 200W more. Funnily enough, precision variable output bench supplies capable of supplying 20A are rare and *very* expensive beasties.

So we turn to a fixed output supply; after all, I will want to run my project without hogging my bench supplies anyway. These can be bought from various electronics suppliers like Farnell from around the £50 mark, and Chinese imports from eBay sellers start around the £20 mark.

All very well and good, but that is money I was not planning on spending and possibly a month of waiting for an already badly delayed project. So I decided to convert an old ATX PSU into a 12V source. This is not a new idea and a quick search revealed many suitable guides online. I had a quick skim, decided I understood the general idea and ploughed ahead.

Wikipedia has a very useful page on the ATX standard complete with pinout diagrams and colour codes. The pile of grey box ATX supplies available on my shelf was examined and one was helpfully labelled with a sticker proclaiming 22A@12V and we had a winner.

Opening the case of the donor 450W CIT branded supply revealed a mostly empty enclosure with the usual basic switching arrangement. I removed most of the wire loom, keeping two of each output voltage (3.3V, 5V and 12V; I figured the other voltages might be useful in future) and three commons; the 3.3V and 5V sense lines were also kept. Each of these pairs was cut to length and the leads were wired to 4mm sockets.

The "PWR_EN" line was wired via a toggle switch to ground so the output can be switched on and off easily. The 5V standby and a 5V output line were wired to a green/red bi-colour LED (via 270Ω current limiting resistors) to give an indication that mains is present and show when the output is on.

Holes were drilled for four 4mm sockets, an indicator LED and a switch. The connectors and switches were all mounted in the PSU casework. I plugged it all in, put an 8.2Ω load resistor on the 5V line with an ammeter in line and a voltmeter across the 12V rail.

ATX bench power supply
I turned the mains on and the LED lit up green (5V standby worked), and when I flicked the output switch the LED turned orange, the 12V line went to 12V and the expected 0.6A flowed through the load resistor.
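As a quick sanity check, those meter readings can be predicted with Ohm's law. A minimal sketch (the 8.2Ω load and 270Ω LED resistors come from the build described above; the ~2V LED forward drop is my assumption, not a measured figure):

```python
def load_current(volts, ohms):
    """Current through a resistive load: I = V / R."""
    return volts / ohms

# 8.2 ohm load resistor across the 5V rail
i_load = load_current(5.0, 8.2)
print(f"5V load current: {i_load:.2f} A")  # ~0.61 A, matching the ~0.6A seen on the ammeter

# Indicator LED behind a 270 ohm resistor; the ~2V forward drop is an assumed figure
i_led = load_current(5.0 - 2.0, 270.0)
print(f"LED current: {i_led * 1000:.1f} mA")  # ~11 mA
```

So the 0.6A observed on the ammeter is exactly what the load resistor should draw, a handy confirmation the 5V rail is regulating properly under load.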

Basically, Success!

I have since loaded the supply up to the 200W operating load and nothing unexpected has happened, so I am happy. It seems converting an ATX PSU is a perfectly good way of getting a 200W 12V supply and I can recommend it to anyone as cheap as me willing to put an hour or so into such a project.

21 August 2011

Vincent Sanders: A year of entropy

It has been a couple of years now since the release of the Entropy Key. Around a year ago we finally managed to have enough stock on hand that I obtained a real production unit and installed it in my border router.

I installed the Debian packages, configured the ekeyd into EGD server mode and installed the EGD client packages on my other machines and pretty much forgot about it.

The recent release of the ekey host software (version 1.1.4) reminded me that I had been quietly collecting statistics for almost a whole year and had some munin graphs to share.

The munin graphs of the generated output are pretty dull. Aside from the minor efficiency improvement in the 1.1.3 release installed mid December, the generated rate has been a flat 3.93 kilobytes a second.
The temperature sensor on the Entropy key shows a good correlation with the on-board CPU thermal sensors within the host system.
The host border router/server is a busy box which provides most network services including secure LDAP and SSL web services, it shows no sign of not having enough entropy at any point in the year.
The sites main file server and compile engine is a 4 core 8 gigabyte system with 12 drives. This system is heavily used with high load almost all the time but without the EGD client running has almost no entropy available.
The next system is my personal workstation. This machine often gets rebooted and is usually turned off overnight which is why there are gaps in the graph and odd discontinuities. Nonetheless entropy is always available just like the rest of my systems ;-)
And almost as a "control", here is a file server on the same network which has not been running the EGD client (OK, OK, it was misconfigured and I am an idiot ;-).
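For anyone wanting to reproduce the "almost no entropy" observation without munin, the Linux kernel exposes its pool estimate through procfs. A minimal sketch (the path is the standard Linux interface; anything else here is illustrative):

```python
def entropy_available(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy-pool estimate in bits (Linux)."""
    with open(path) as f:
        return int(f.read().strip())

# On a busy box with no external entropy source this figure can hover near zero;
# with ekeyd serving EGD clients it should stay comfortably high.
# print(entropy_available())
```

Polling this value over time is essentially what the munin graphs above are showing.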
In conclusion it seems an entropy key can keep at least this small network completely filled up with all the entropy it needs without much fuss. YAY!

31 May 2011

Vincent Sanders: Can you just...

I should have learned by now: no sentence that starts "Can you just" ever ends well. In my experience it means someone else has misunderstood the problem at hand. Then we proceed to the part of the project where (according to my lovely wife) I end up using my condescending voice.

I work through what I have been asked for and eventually, if it goes well, we end up defining what the actual, real job to be done is. And almost inevitably the "Can you just" has become a major job.

Most of us, I fear, recognise this "pattern" from our working lives with software. Well, I am glad to report this pattern exists in the real, physical world too.

Last week we took a trip to my parents-in-law, two thoroughly nice people (I lucked out, no evil mother-in-law here). I had been asked before I went "Can you just fix the garage door, it sticks". So I took along some basic tools expecting to lubricate a hinge or something.

Turns out it was the garage back door (for humans to get in and out) and...well, there were bigger issues. The door frame was rotten and the door had pulled it away from the wall. So a new door frame you say? Ah, well, yes.

At some point in the past someone had fitted a double glazed window and had, kinda, removed the lintel above the door and window! Yes, there were several courses of brick masonry wall resting on top of a uPVC window frame. The door frame had provided some support till it rotted and fell apart.

The window was under a huge strain and was actually 5cm shorter at one end than the other. The brickwork was no longer mortared and could better be described as a pile of bricks held together with caulking.

Vincent fitting the latch to the new door frame
So my bank holiday weekend was spent removing those bricks, making good, building a frame from 44x97mm planed timber bolted into the walls and covering it with weatherboard. OK it is not masonry but on the other hand it will not be falling on anyone's head any-time soon.

And before anyone comments, yes, that frame is true, the spirit level says so. Alas that window frame is very, very wonky indeed and the wall it is sitting on is 4cm out too, so it looks a bit off.

Possibly not my best work, but you can hang a couple of hundred kilos from the frame without it budging, so I think it is solid enough for this purpose.

Providing my father-in-law keeps treating it with wood preserver every couple of years it will not go rotten either and should last a long time.

18 February 2011

Vincent Sanders: Shedding

For some time now Melodie has wanted more outside storage.

The current outhouse is a 3 foot by 8 foot converted outside toilet. Due to its age (built 1884) this building is no longer watertight and is generally disintegrating at an alarming rate. One day soon it will have to be demolished. That day has not yet arrived; instead we purchased a plastic shed.

Unfortunately the only viable place for the new shed was next to the old one, and this required removing a six foot section of flower bed complete with ivy, bamboo and an old sink.

Last Saturday I completed this removal and laid a concrete base ready to take the new shed. You would not think such a small area (2.8m square) would require so much material and effort to concrete over. 300Kg of 3:2:1 aggregate:sand:cement concrete mix went into the hole along with 100Kg of instant set concrete (for a rapid surface in the changeable weather).
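For anyone repeating the job, splitting a dry mix total into component masses from the ratio is simple arithmetic. A sketch using the figures above (300Kg total at 3:2:1 aggregate:sand:cement):

```python
def mix_masses(total_kg, ratio):
    """Split a total mix mass into component masses by ratio, e.g. 3:2:1."""
    parts = sum(ratio)
    return tuple(total_kg * r / parts for r in ratio)

# 300 Kg of 3:2:1 aggregate:sand:cement works out as:
aggregate, sand, cement = mix_masses(300, (3, 2, 1))
print(aggregate, sand, cement)  # 150.0 100.0 50.0 (Kg)
```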

Thursday afternoon Geoff (my nice helpful neighbour) offered to assist me in the assembly of the shed. I re-arranged my work schedule (yay home working) and after three hours the shed was assembled.

This morning it occurred to me that my webcam had recorded a time-lapse movie of the construction. I uploaded it to YouTube and present it here for your amusement.

<iframe allowfullscreen="" frameborder="0" height="240" src="http://www.youtube.com/embed/brTXLO2R5D0?rel=0" title="YouTube video player" width="320"></iframe>

7 February 2011

Vincent Sanders: It is a bit breezy

The weather has been a bit odd round here for a while now. The snow storms in December and early January were a mild inconvenience for me but as I work from home the advice not to travel was not too much of a problem.

It seems however that now February is here and the snow is gone we are in for some pretty strong storms. This actually affected me today when my neighbour's garden wall was blown over!

As you can see my nice new IP camera captured the event, well OK the frame before and after but you get the idea. Unfortunately for my neighbour the wall collapsed onto his pickup causing extensive damage.

A short time later when my weather station was recording gusts well over 50mph nearby drains started flowing the wrong way and it became a case of water, water everywhere!

It seems that when the new buildings were erected a few years ago the architect, while maximising used space on the building plot, may have inadvertently created something of a wind tunnel.

The gap between our properties runs parallel (north to south) to the valley below. The wind seems to travel along the crest of the valley and be funnelled through any spaces between the houses. Fortunately the rest of the properties on Green Lane are pretty old, the spacing between them is very generous and the funnelling effect is minimal.

I wonder if we could fit a wind turbine in there? Alas it was too much for my secondary anemometer which is now smashed in three parts.

Also gaining access to the gable end wall of my property has become somewhat perilous (hence the wonky APT antenna I cannot get fixed). Yes, that really is a guy balancing on a ladder 10m up in a strong wind. And indeed the platform the ladder is resting on is built from scaffolding board wedged between the houses.

I guess the hospital emergency room being 300m away means medical assistance is on hand, even so he is braver than I am. So my weather satellite imagery will just have to come from the internet like everyone else's for a while.

6 December 2010

Vincent Sanders: New Video Camera

Last week the boys were playing with their remote control car in the snow (which was fun) and Alex wanted to record what his car saw. I immediately dissuaded him from the idea that he could use the family's DV camcorder taped to his car!

The camera and a UK penny
Later on that day, though, I saw a rocket project on LMR which used a micro camera and suggested such cameras were available from eBay very cheaply. I did a quick search, ordered one from a UK seller at £15 plus £2.99 P&P and thought no more of it.

This afternoon the camera arrived and it really is tiny, and Alex is already scheming of ways to use it in addition to attaching it to his RC car.

The video output is low quality (very blurry in low light) and I have yet to figure out how to disable the time stamp (which is wrong), but it does indeed record video to its storage, which can be downloaded via USB and played using VLC.

So if you want a tiny video camera (and an 8Gig micro SD card) which is so cheap you do not care if it gets broken, I can recommend these.


29 November 2010

Vincent Sanders: Mobile Telecommunication Luddite

Actually I cannot really be called a Luddite because I am not really against telecommunication progress, nor do I fear it will negatively affect my employment...but the title sounded good? ;-)
Anyhow, I have a strange relationship with mobile phones. My ability to have a functioning device has historically limited their usefulness to me.

Because of my low usage and odd attitude for a techie I have always used PAYG for my personal phone. Work may have provided me with a device on a contract for being on-call etc., but in general it has been PAYG all the way. My first phone was a Nokia 1610 back in the late nineties, second user after my employer at the time did a contract upgrade and had a load of "leftovers". I paid £50 for it and bought a ten quid SIM and ten pounds of credit.

My phones
The standing joke among my friends for the next decade was that whatever provider I moved to would go out of business within the year. I went through about eight providers in ten years, and for the first six the same tenner of credit went with me! Each time the PAYG provider folded I would be moved on to someone new with a tenner's credit, a new SIM and a new number.

After Easy Mobile closed they did not offer a new provider with credit and the "recommended" provider was very poor, so this time I shopped around and went with Tesco Mobile, but remembered to take my number, which did at least stop my colleagues making fun of me for another move.

During this period my phones were no better than my providers. I bought a Nokia 1100 and used other people's leftovers, culminating in Daniel Silverstone taking pity on me and giving me his Sony Ericsson K800 at the end of his contract; despite acquiring an ADP1 and a G1 (both of which have dropped dead) this is the phone I have been using for three years now.

Due to this dreadful relationship I did not get the most from the technology and felt like I was missing out. Over the last few years, to try and address this, I have set myself a target of having a phone physically with me, turned on and in credit at all times. This I have finally managed for a whole six month stretch, and as a reward I have bought myself a nice Android based smartphone on contract with T-Mobile.

I have done all the administrative things to port the number so no-one will need to alter their address books :-)

After only a few days of usage I have already discovered why the combination of a smartphone and a decent contract is so appealing. The freedom to just call and text and use the internet wherever you are, without stopping to worry if you have enough credit, is a wonderful thing. And decent hardware with the guarantee that if it breaks all I have to do is go into the store and they give me a new one.

I went for the HTC Wildfire instead of the Desire on cost grounds (£100 up front instead of £290) and it seems perfectly reasonable hardware performance wise. My one and only niggle is that T-Mobile have nobbled the media player so it only plays some MP3s and not Oggs or FLACs. No real challenge, just a bit disappointing that vendors seem to think they need to fiddle.


5 November 2010

Vincent Sanders: Keeping kindling dry

I, along with a great number of people I know, now possess a 3rd generation Kindle. It seems Amazon have found a feature set and price point which make this device a winning solution.

My bookshelf complete with covered kindle
I did look at a huge number of alternatives like the Sony PRS600 and others, but they were all more expensive than the £110 for the Kindle and did not have enough features to make a compelling argument for spending more.

Yes it has DRM. Yes it "only" supports PDF, MOBI and MP3. Yes it will not win any style or usability awards. But I went into this with eyes open: the device is "good enough".

The device lets me read books from a reasonable display. The integration with amazon.com is so seamless it poses a serious danger to my bank account. I should expand on that last point :-) Amazon have got the whole spending-money-on-a-book thing executed so well that you do not think twice about a couple of pounds here and there, and this soon adds up. I have set myself a rigid budget.

My main complaints are really just niggles:

  • Another different USB connector! Wahh, I thought everyone had agreed on mini USB? It seems that I now have to have yet another lead, for micro USB.

  • The commercial book selection is a bit limited and missing a surprising number of popular titles. Some of this appears to be the publishers and authors simply clinging to their old business model. I fear some of them might not survive, and early indications are they are behaving like the music industry did...Guys, you are selling an infinite good; a scarcity model is going to fail!

  • The price of some of the books is absurd...they are asking hardback prices for the electronic edition! Seriously, how on earth can that possibly be justified? I can see that a hardback book with its print run could cost £5 per physical item (going from Lulu print on demand prices as a worst case) plus shipping and stocking fees. So how can you possibly justify charging the same price for a pile of bits where none of that applies? Also the pile of bits cannot be lent or sold. Not impressed.

  • eBook formatting is generally dreadful. I do not know who is mastering these books but they need to do a better job. If they tried to pull this with the physical editions they would get a seriously large number of returns.

  • I still have to pay whispernet delivery fees even though, because it is the wi-fi model, I am providing the bandwidth myself. I can see that differentiating between 3G and wi-fi delivery is a bit hard for them though.
However my one and only real complaint with the offering as a whole is the astronomical asking price for the leather cover. The cover is currently 25% of the price of the kindle itself (£30 cover, £110 kindle), which is just silly. It is a pretty nice cover and the clever clip attachment means it does offer an integrated solution to protecting your kindle, but not £30 nice.

Kindle in a sock cover
So my lovely wife (her kindle was bought with the cover) made me a sock for mine. This is great for casual round the house usage to stop me scuffing the screen but was a bit lightweight for protecting the kindle when out and about.

One day last week I had an idea: I would make my own protective cover by crafting something I had wanted to make for ages. And so the (unoriginal, I am sure) project of a hollowed out book for housing my kindle was implemented.


My hollowed out book kindle cover
A quick Google later and I had a set of plausible instructions to follow. I used possibly the most out of date book ever (published 1981) on electronic test equipment, partly because it was an ex-library sell off which cost 10 pence back in 1995 but mainly because it was the right size to just enclose the kindle without adding too much bulk.

I learnt a couple of things doing this:
  • Do not let your PVA (white) glue mix get too runny; you want it fluid enough to be easily absorbed but not watery - this is important because otherwise the paper absorbs too much water and crinkles.
  • Do not use a book where the binding has already gone bad; select a "clean" book. The spine of this book was yellowed and cracking before I started, which means the spine simply cracks open at the hollowed out bit and it is very obvious.
  • Work out where the "solid" part at the back is going to be and treat that separately so you get a nice solid base at the back of the hole. In mine it is not all stuck together and is a bit wavy. Do be sure you leave enough depth for the kindle though.
  • Take your time and be careful with the glue; it is amazing how obvious even a simple splash of glue in the wrong place is. Use a small brush for this - a paint brush is fast but sloppy.
  • Measure carefully and cut only a few pages at a time; it takes a bit longer but looks much better. Also I did not drill the corners of my hole, which means they are a little scruffy.
  • Use the sharpest, thinnest knife you can; this really helps. I started with a small Stanley knife but switching to my hobby scalpel gave much better results.
  • If you have some, use woodworking clamps to clamp a bit of timber (I had some offcuts of shelving) around the book to compress it while the glue dries. Do not clamp the spine if you can avoid it. This method ensures:
    1. Heavy things do not fall off the book while it dries.
    2. An even strong pressure is applied.
    3. The book does not warp or bend while the glue dries
All in all I kinda like the results and I think I will try again with a more modern book where the spine is not so broken to begin with.
